
    Fast Gibbs sampling for high-dimensional Bayesian inversion

    Solving ill-posed inverse problems by Bayesian inference has recently attracted considerable attention. Compared to deterministic approaches, the probabilistic representation of the solution by the posterior distribution can be exploited to explore and quantify its uncertainties. In applications where the inverse solution is subject to further analysis procedures, this can be a significant advantage. Alongside theoretical progress, various new computational techniques allow us to sample very high-dimensional posterior distributions: in (Lucka 2012 Inverse Problems 28 125012), a Markov chain Monte Carlo (MCMC) posterior sampler was developed for linear inverse problems with ℓ1-type priors. In this article, we extend this single component (SC) Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general ℓ_p^q priors with additional hard constraints. Besides a fast computation of the conditional, SC densities in an explicit, parameterized form, a fast, robust and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can utilize their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers by different computed examples. These new samplers allow us to perform sample-based Bayesian inference in high-dimensional scenarios with certain priors for the first time, including the inversion of computed tomography data with the popular isotropic total variation prior.
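    The single-component Gibbs idea can be illustrated with a minimal sketch: cycle over the components and draw each one from its one-dimensional conditional density via slice sampling. This is not the paper's optimized sampler; the forward matrix A, data y, noise level sigma and prior weight lam below are illustrative assumptions for an ℓ1-type posterior.

    ```python
    # Minimal slice-within-Gibbs sketch for p(x) ~ exp(-||Ax-y||^2/(2 s^2) - lam ||x||_1)
    import numpy as np

    rng = np.random.default_rng(0)

    def slice_sample_1d(logf, x0, w=1.0, max_steps=50):
        """One slice-sampling update (step-out and shrink) of a scalar under log-density logf."""
        log_u = logf(x0) + np.log(rng.uniform())        # slice level below logf(x0)
        left = x0 - w * rng.uniform()                   # randomly placed initial bracket
        right = left + w
        for _ in range(max_steps):                      # step out to cover the slice
            if logf(left) < log_u:
                break
            left -= w
        for _ in range(max_steps):
            if logf(right) < log_u:
                break
            right += w
        while True:                                     # shrink until a point in the slice is hit
            x1 = rng.uniform(left, right)
            if logf(x1) >= log_u:
                return x1
            if x1 < x0:
                left = x1
            else:
                right = x1

    def gibbs_l1(A, y, sigma, lam, n_iter=200):
        """Sweep over components; each 1D conditional is updated by one slice-sampling step."""
        n = A.shape[1]
        x = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                a_i = A[:, i]
                r = y - A @ x + a_i * x[i]              # residual with component i removed
                logf = lambda t: -np.sum((r - a_i * t) ** 2) / (2 * sigma**2) - lam * abs(t)
                x[i] = slice_sample_1d(logf, x[i])
        return x
    ```

    For the real sampler, the paper exploits the explicit parameterized form of the conditionals instead of evaluating the full residual in every update.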

    Sparse Bayesian Inference & Uncertainty Quantification for Inverse Imaging Problems

    During the last two decades, sparsity has emerged as a key concept for solving linear and non-linear ill-posed inverse problems, in particular for severely ill-posed problems and applications with incomplete, sub-sampled data. At the same time, there is a growing demand to obtain quantitative instead of just qualitative inverse results, together with a systematic assessment of their uncertainties (uncertainty quantification, UQ). Bayesian inference seems like a suitable framework to combine sparsity and UQ, but its application to large-scale inverse problems resulting from fine discretizations of PDE models leads to severe computational and conceptual challenges. In this talk, we will focus on two different Bayesian approaches to model sparsity as a-priori information: via convex but non-smooth prior energies, such as total variation and Besov space priors, and via non-convex but smooth priors arising from hierarchical Bayesian modeling. To illustrate our findings, we will rely on experimental data from challenging biomedical imaging applications such as EEG/MEG source localization and limited-angle CT. We want to share the experiences and results we obtained, as well as the open questions we face, from our perspective as researchers coming from a background in biomedical imaging rather than in statistics, and hope to stimulate a fruitful discussion for both sides.

    Bayesian inversion in biomedical imaging

    Biomedical imaging techniques have become a key technology for assessing the structure or function of living organisms in a non-invasive way. Besides innovations in the instrumentation, the development of new and improved methods for processing and analysis of the measured data has become a vital field of research. Building on traditional signal processing, this area nowadays also comprises mathematical modeling, numerical simulation and inverse problems. The latter describes the reconstruction of quantities of interest from measured data and a given generative model. Unfortunately, most inverse problems are ill-posed, which means that a robust and reliable reconstruction is not possible unless additional a-priori information on the quantity of interest is incorporated into the solution method. Bayesian inversion is a mathematical methodology to formulate and employ a-priori information in computational schemes to solve the inverse problem. This thesis provides an up-to-date overview of Bayesian inversion and exemplifies the presented concepts and algorithms in various numerical studies, including challenging biomedical imaging applications with experimental data. A particular focus is on using sparsity as a-priori information within the Bayesian framework.

    Photoacoustic imaging with a multi-view Fabry-Perot scanner

    Planar Fabry-Pérot (FP) ultrasound sensor arrays have been used to produce in-vivo photoacoustic images of high quality due to their broad detection bandwidth, small element size, and dense spatial sampling. However, like all planar arrays, FP sensors suffer from the limited-view problem. Here, a multi-angle FP sensor system is described that mitigates the partial-view effects of a planar FP sensor while retaining its detection advantages. The possibility of improving data acquisition speed through the use of sub-sampling techniques is also explored. The capabilities of the system are demonstrated with 3D images of pre-clinical targets.

    Never look back - A modified EnKF method and its application to the training of neural networks without back propagation

    In this work, we present a new derivative-free optimization method and investigate its use for training neural networks. Our method is motivated by the Ensemble Kalman Filter (EnKF), which has been used successfully for solving optimization problems that involve large-scale, highly nonlinear dynamical systems. A key benefit of the EnKF method is that it requires only the evaluation of the forward propagation but not its derivatives. Hence, in the context of neural networks, it alleviates the need for back propagation and reduces the memory consumption dramatically. However, the method is not a pure "black-box" global optimization heuristic, as it efficiently utilizes the structure of typical learning problems. Promising first results of the EnKF for training deep neural networks have been presented recently by Kovachki and Stuart. We propose an important modification of the EnKF that enables us to prove convergence of our method to the minimizer of a strongly convex function. Our method also bears similarity to implicit filtering, and we demonstrate its potential for minimizing highly oscillatory functions using a simple example. Further, we provide numerical examples that demonstrate the potential of our method for training deep neural networks.
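    The derivative-free character of this family of methods can be seen in a minimal sketch of the basic ensemble Kalman inversion (EKI) update, which uses only forward evaluations. This is the standard EKI step, not the authors' specific modification; the forward map G and data y are illustrative assumptions.

    ```python
    # Basic ensemble Kalman inversion step: no gradients of G are ever computed.
    import numpy as np

    rng = np.random.default_rng(1)

    def eki_step(U, G, y, gamma=1e-3):
        """One EKI update. U: (J, d) ensemble of parameters; G: forward map R^d -> R^m."""
        J = U.shape[0]
        Gu = np.array([G(u) for u in U])                # forward evaluations only
        u_bar, g_bar = U.mean(axis=0), Gu.mean(axis=0)
        dU, dG = U - u_bar, Gu - g_bar
        C_ug = dU.T @ dG / J                            # parameter-output cross-covariance
        C_gg = dG.T @ dG / J                            # output covariance
        K = C_ug @ np.linalg.inv(C_gg + gamma * np.eye(C_gg.shape[0]))
        return U + (y - Gu) @ K.T                       # Kalman-type correction per particle

    # usage: fit a linear model G(u) = A u to data without any derivatives
    A = np.array([[2.0, 0.0], [0.0, 3.0]])
    y = np.array([2.0, 3.0])                            # minimizer of ||A u - y||^2 is (1, 1)
    U = rng.normal(size=(50, 2))                        # ensemble of 50 particles
    for _ in range(100):
        U = eki_step(U, lambda u: A @ u, y)
    u_hat = U.mean(axis=0)
    ```

    In a neural-network setting, G would be the forward pass of the network and U an ensemble of weight vectors, so no back propagation (and no stored activations) is needed.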

    Equivalent-source acoustic holography for projecting measured ultrasound fields through complex media

    Holographic projections of experimental ultrasound measurements generally use the angular spectrum method or Rayleigh integral, where the measured data is imposed as a Dirichlet boundary condition. In contrast, full-wave models, which can account for more complex wave behaviour, often use interior mass or velocity sources to introduce acoustic energy into the simulation. Here, a method to generate an equivalent interior source that reproduces the measurement data is proposed, based on gradient-based optimisation. The equivalent source can then be used with full-wave models (for example, the open-source k-Wave toolbox) to compute holographic projections through complex media, including nonlinearity and heterogeneous material properties. Numerical and experimental results using both time-domain and continuous-wave sources are used to demonstrate the accuracy of the approach.
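    The core of the equivalent-source idea can be sketched as a least-squares fit: find interior source amplitudes whose simulated field matches the measured data, by gradient descent on the data misfit. The linear forward operator F below is an illustrative stand-in for a full-wave model such as k-Wave, not its actual API.

    ```python
    # Equivalent-source fitting sketch: minimise J(s) = 0.5 * ||F s - p_meas||^2.
    import numpy as np

    rng = np.random.default_rng(2)

    def fit_equivalent_source(F, p_meas, lr=None, n_iter=500):
        """Plain gradient descent on the data misfit; F maps source amplitudes to measurements."""
        m, n = F.shape
        if lr is None:
            lr = 1.0 / np.linalg.norm(F, 2) ** 2        # step size from largest singular value
        s = np.zeros(n)
        for _ in range(n_iter):
            grad = F.T @ (F @ s - p_meas)               # gradient = adjoint applied to residual
            s -= lr * grad
        return s

    # usage: recover a known source from noiseless synthetic measurements
    F = rng.normal(size=(40, 10))
    s_true = rng.normal(size=10)
    s_hat = fit_equivalent_source(F, F @ s_true)
    ```

    With a full-wave model, applying F and its adjoint means running forward and adjoint wave simulations, but the optimisation loop has the same structure.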

    Using reciprocity for relating the simulation of transcranial current stimulation to the EEG forward problem

    To explore the relationship between transcranial current stimulation (tCS) and the electroencephalography (EEG) forward problem, we investigate and compare the accuracy and efficiency of a reciprocal and a direct EEG forward approach for dipolar primary current sources, both based on the finite element method (FEM), namely the adjoint approach (AA) and the partial integration approach in conjunction with a transfer matrix concept (PI). By analyzing numerical results, comparing to analytically derived EEG forward potentials and estimating computational complexity in spherical shell models, AA turns out to be essentially identical to PI. It is then proven that AA and PI are also algebraically identical even for general head models. This relation offers a direct link between the EEG forward problem and tCS. We then demonstrate how the quasi-analytical EEG forward solutions in sphere models can be used to validate the numerical accuracies of FEM-based tCS simulation approaches. These approaches differ with respect to the ease with which they can be employed for realistic head modeling based on MRI-derived segmentations. We show that while the accuracy of the easiest-to-realize approach based on regular hexahedral elements is already quite high, it can be significantly improved if a geometry adaptation of the elements is employed in conjunction with an isoparametric FEM approach. While the latter approach does not involve any additional difficulties for the user, it reaches the high accuracies of surface-segmentation based tetrahedral FEM, which is considerably more difficult to implement and topologically less flexible in practice. Finally, in a highly realistic head volume conductor model and when compared to the regular alternative, the geometry-adapted hexahedral FEM is shown to result in significant changes in tCS current flow orientation and magnitude, up to 45° and a factor of 1.66, respectively.
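    The algebraic identity between the direct and the adjoint/reciprocal computation rests on the symmetry of the FEM system matrix, which a small linear-algebra example can illustrate: solving for the dipole and evaluating at the electrodes gives the same value as solving once for a unit current between the electrodes and evaluating at the dipole. The 5x5 matrix below is a toy stand-in for a real FEM stiffness matrix, not an actual head model.

    ```python
    # Reciprocity demo: l^T K^{-1} b = (K^{-1} l)^T b for symmetric K.
    import numpy as np

    rng = np.random.default_rng(3)

    # symmetric positive definite "stiffness" matrix (toy example)
    M = rng.normal(size=(5, 5))
    K = M @ M.T + 5 * np.eye(5)

    b_dipole = np.zeros(5); b_dipole[1], b_dipole[2] = 1.0, -1.0  # dipolar right-hand side
    l = np.zeros(5); l[0], l[4] = 1.0, -1.0                        # electrode pair

    # direct approach: solve for the dipole source, read off the electrode difference
    u = np.linalg.solve(K, b_dipole)
    v_direct = l @ u

    # adjoint approach: solve once for the electrode pair, evaluate at the dipole
    w = np.linalg.solve(K, l)
    v_adjoint = w @ b_dipole
    ```

    In practice the adjoint/transfer-matrix formulation pays off because the electrode solves are done once and reused for all dipole positions.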

    Improved EEG source localization with Bayesian uncertainty modelling of unknown skull conductivity

    Electroencephalography (EEG) source imaging is an ill-posed inverse problem that requires accurate conductivity modelling of the head tissues, especially the skull. Unfortunately, the conductivity values are difficult to determine in vivo. In this paper, we show that the exact knowledge of the skull conductivity is not always necessary when the Bayesian approximation error (BAE) approach is exploited. In BAE, we first postulate a probability
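    The BAE construction can be sketched offline: postulate a probability distribution for the skull conductivity, sample the discrepancy between the accurate forward model and a fixed standard model, and fold its statistics into the measurement noise. The toy lead-field, conductivity range and dimensions below are illustrative assumptions, not the paper's model.

    ```python
    # Bayesian approximation error sketch: treat model mismatch as extra noise.
    import numpy as np

    rng = np.random.default_rng(4)

    def forward(sigma, n_sensors=8, n_sources=4):
        """Toy conductivity-dependent lead-field matrix (stand-in for a real head model)."""
        base = np.fromfunction(lambda i, j: 1.0 / (1.0 + i + j), (n_sensors, n_sources))
        return base / sigma

    sigma0 = 0.01                                       # fixed "standard" skull conductivity
    errs = []
    for _ in range(500):
        sigma = rng.uniform(0.004, 0.02)                # sample from the postulated prior
        x = rng.normal(size=4)                          # source sample from its prior
        errs.append((forward(sigma) - forward(sigma0)) @ x)
    errs = np.array(errs)

    # offline BAE statistics: in the inversion, use noise mean mu_e and
    # covariance C_noise + C_e instead of the measurement noise alone
    mu_e = errs.mean(axis=0)
    C_e = np.cov(errs.T)
    ```

    The inversion itself then uses only the cheap standard model with this enlarged noise model, which is why exact knowledge of the skull conductivity is not needed.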

    A cone-beam X-ray computed tomography data collection designed for machine learning

    Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide not only data from a single object but from a class of objects with natural variability. For each walnut, CB projections on three different source orbits were acquired to provide CB data with different cone angles, as well as to enable computing artefact-free, high-quality ground truth images from the combined data that can be used for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation for other tasks, such as image reconstruction from limited or sparse-angle (low-dose) scanning, super resolution, or segmentation.
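    A typical machine-learning use of such a dataset is to simulate low-dose acquisition by subsampling the projection angles while keeping the full-data reconstruction as ground truth. The array shapes and number of projections below are illustrative assumptions, not the dataset's actual layout.

    ```python
    # Sparse-angle subsampling sketch for a cone-beam projection stack.
    import numpy as np

    rng = np.random.default_rng(5)

    n_proj, n_rows, n_cols = 1200, 16, 32               # toy projection stack dimensions
    projections = rng.random((n_proj, n_rows, n_cols))
    angles = np.linspace(0.0, 2 * np.pi, n_proj, endpoint=False)

    def sparse_angle_subset(projections, angles, factor):
        """Keep every `factor`-th projection, e.g. factor=20 leaves 60 views."""
        idx = np.arange(0, len(angles), factor)
        return projections[idx], angles[idx]

    sub_proj, sub_angles = sparse_angle_subset(projections, angles, 20)
    ```

    A network trained on (sparse-angle reconstruction, ground truth) pairs built this way can then be evaluated on held-out walnuts from the same collection.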